Greedy algorithms for nonnegativity-constrained simultaneous sparse recovery
Authors
Abstract
Similar resources
Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit
A simultaneous sparse approximation problem requests a good approximation of several input signals at once using different linear combinations of the same elementary signals. At the same time, the problem balances the error in approximation against the total number of elementary signals that participate. These elementary signals typically model coherent structures in the input signals, and they...
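As a concrete illustration of the greedy pursuit idea described in that abstract, here is a minimal sketch of Simultaneous Orthogonal Matching Pursuit (S-OMP) in NumPy. The function name `somp`, the l2 aggregation of correlations across signals, and the fixed sparsity level `k` are illustrative assumptions, not details taken from the paper.

```python
# Minimal S-OMP sketch: greedily build one support shared by all signals in Y.
# Assumptions (not from the paper): unit-norm dictionary columns, known sparsity k,
# atom selection by the l2 norm of its correlations with all current residuals.
import numpy as np

def somp(A, Y, k):
    """Pick k dictionary columns jointly explaining all columns of Y."""
    m, n = A.shape
    support = []
    R = Y.copy()                                  # residual matrix, one column per signal
    for _ in range(k):
        corr = A.T @ R                            # correlation of every atom with every residual
        scores = np.linalg.norm(corr, axis=1)     # aggregate over signals
        scores[support] = -np.inf                 # never reselect an atom
        support.append(int(np.argmax(scores)))
        # refit all signals on the current common support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ coef
    X = np.zeros((n, Y.shape[1]))
    X[support, :] = coef
    return np.array(sorted(support)), X
```

Each iteration trades a little extra approximation error for one more shared atom, which is exactly the error-versus-number-of-atoms balance the abstract describes.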
Sparse approximation and recovery by greedy algorithms in Banach spaces
We study sparse approximation by greedy algorithms. We prove the Lebesgue-type inequalities for the Weak Chebyshev Greedy Algorithm (WCGA), a generalization of the Weak Orthogonal Matching Pursuit to the case of a Banach space. The main novelty of these results is a Banach space setting instead of a Hilbert space setting. The results are proved for redundant dictionaries satisfying certain cond...
Greedy Subspace Pursuit for Joint Sparse Recovery
In this paper, we address the sparse multiple measurement vector (MMV) problem where the objective is to recover a set of sparse nonzero row vectors or indices of a signal matrix from incomplete measurements. Ideally, regardless of the number of columns in the signal matrix, the sparsity (k) plus one measurements is sufficient for the uniform recovery of signal vectors for almost all signals, i...
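A small, noiseless sketch of the rank-aware intuition behind such MMV guarantees: when the row-sparse signal matrix has full row rank, the column span of the measurements Y = AX coincides with the span of the k active dictionary columns, so the joint support can be read off by projecting each column of A onto range(Y). The matrix sizes, the Gaussian dictionary, and the tolerance below are illustrative assumptions; this is not the subspace pursuit algorithm of the paper.

```python
# Noiseless MMV illustration: identify the shared row support of X from Y = A X
# by checking which dictionary atoms lie in the column span of Y.
import numpy as np

rng = np.random.default_rng(0)
m, n, k, L = 6, 40, 5, 8                 # k + 1 measurements per signal, L signals (assumed sizes)
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)           # unit-norm atoms

true_support = np.sort(rng.choice(n, size=k, replace=False))
X = np.zeros((n, L))
X[true_support, :] = rng.standard_normal((k, L))   # generic coefficients => full row rank
Y = A @ X                                 # noiseless MMV measurements

U, s, _ = np.linalg.svd(Y, full_matrices=False)
U = U[:, s > 1e-10 * s[0]]               # orthonormal basis for range(Y), dimension k
residual = np.linalg.norm(A - U @ (U.T @ A), axis=0)   # distance of each atom to range(Y)
estimated_support = np.sort(np.argsort(residual)[:k])

print(true_support, estimated_support)   # generically identical in this noiseless setting
```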
Phase Transitions for Greedy Sparse Approximation Algorithms
A major enterprise in compressed sensing and sparse approximation is the design and analysis of computationally tractable algorithms for recovering sparse, exact or approximate, solutions of underdetermined linear systems of equations. Many such algorithms have now been proven using the ubiquitous Restricted Isometry Property (RIP) [9] to have optimal-order uniform recovery guarantees. However,...
Greedy Algorithms for Sparse Reinforcement Learning
Feature selection and regularization are becoming increasingly prominent tools in the efforts of the reinforcement learning (RL) community to expand the reach and applicability of RL. One approach to the problem of feature selection is to impose a sparsity-inducing form of regularization on the learning method. Recent work on L1 regularization has adapted techniques from the supervised learning...
Journal
Journal title: Signal Processing
Year: 2016
ISSN: 0165-1684
DOI: 10.1016/j.sigpro.2016.01.021